18 research outputs found

    Probabilistic facial feature extraction using joint distribution of location and texture information

    In this work, we propose a method which extracts critical points on a face using both location and texture information. The approach automatically learns feature information from training data and finds the best facial feature locations by maximizing the joint distribution of location and texture parameters. We first introduce an independence assumption, then improve upon this model by assuming dependence of location parameters but independence of texture parameters. We model the combined location parameters with a multivariate Gaussian for computational reasons, and the texture parameters with a Gaussian mixture model. It is shown that the new method outperforms active appearance models for the same experimental setup.
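The maximization described above can be sketched as follows. This is a toy illustration, not the paper's trained model: only the multivariate-Gaussian location term is optimized by gradient ascent, and the GMM texture term is omitted for brevity.

```python
import numpy as np

# Hypothetical sketch: pick a feature location x maximizing the joint
# log-likelihood log p(x) + log p(t(x)). Here only the location term,
# a multivariate Gaussian, is optimized; all parameters are toy values.

mu = np.array([10.0, 20.0])                 # mean location of one feature (toy)
cov = np.array([[4.0, 0.0], [0.0, 4.0]])    # location covariance (toy)
cov_inv = np.linalg.inv(cov)

def location_grad(x):
    # gradient of log N(x; mu, cov) with respect to x
    return -cov_inv @ (x - mu)

x = np.array([0.0, 0.0])            # initial guess for the feature location
for _ in range(200):
    x = x + 0.5 * location_grad(x)  # fixed-step gradient ascent

print(np.round(x, 3))  # converges toward the Gaussian mean
```

In the full model, the same ascent would also include the gradient of the GMM texture log-likelihood evaluated at the patch extracted around `x`.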

    Statistical facial feature extraction and lip segmentation

    Facial features such as lip corners, eye corners and nose tip are critical points in a human face. Robust extraction of such facial feature locations is an important problem with a wide range of applications, including audio-visual speech recognition, human-computer interaction, emotion recognition, fatigue detection and gesture recognition. In this thesis, we develop a probabilistic method for facial feature extraction. This technique is able to automatically learn location and texture information of facial features from a training set. Facial feature locations are extracted from face regions using joint distributions of locations and textures represented with mixtures of Gaussians. This formulation results in a maximum likelihood (ML) optimization problem which can be solved using either a gradient ascent or a Newton-type algorithm. Extracted lip corner locations are then used to initialize a lip segmentation algorithm that extracts the lip contours. We develop a level-set based method that utilizes adaptive color distributions and shape priors for lip segmentation. More precisely, an implicit curve representation which learns the color information of lip and non-lip points from a training set is employed. The model can adapt itself to the image of interest using a coarse elliptical region. The extracted lip contour provides detailed information about the lip shape. Both methods are tested using different databases for facial feature extraction and lip segmentation. It is shown that the proposed methods achieve better results than conventional methods: our facial feature extraction method outperforms active appearance models in terms of pixel errors, while our lip segmentation method outperforms region-based level-set curve evolutions in terms of precision and recall.
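The curve-evolution step can be illustrated with a generic region-based level-set update on a synthetic image. This is a basic Chan-Vese-style sketch, not the thesis's adaptive-color, shape-prior model: the curve (the zero level set of phi) moves according to how well each pixel matches the inside/outside region means.

```python
import numpy as np

# Synthetic image: a bright "lip" blob on a dark background
img = np.zeros((40, 40))
img[15:25, 10:30] = 1.0

# Initialize phi as a signed distance-like function, positive inside a circle
yy, xx = np.mgrid[0:40, 0:40]
phi = 8.0 - np.sqrt((yy - 20.0) ** 2 + (xx - 20.0) ** 2)

for _ in range(100):
    inside = phi > 0
    c_in = img[inside].mean()    # mean intensity inside the curve
    c_out = img[~inside].mean()  # mean intensity outside the curve
    # Push phi up where the pixel fits the inside mean better, down otherwise
    force = (img - c_out) ** 2 - (img - c_in) ** 2
    phi = phi + 0.5 * force

seg = phi > 0
print(seg[20, 20], seg[2, 2])  # blob center segmented as lip, corner as background
```

The thesis's method replaces the simple region means with adapted GMM color likelihoods and adds shape priors; the update loop above only shows the region-competition mechanism.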

    Offline signature verification with user-based and global classifiers of local features

    Signature verification deals with the problem of identifying forged signatures of a user from his/her genuine signatures. The difficulty lies in identifying allowed variations in a user’s signatures, in the presence of high intra-class and low inter-class variability (a forgery may be more similar to a user’s genuine signature than his/her other genuine signatures are). The problem can be seen as non-rigid object matching where classes are very similar. In the field of biometrics, signature is considered a behavioral biometric, and the problem poses further difficulties compared to other modalities (e.g. fingerprints) due to the added issue of skilled forgeries. A novel offline (image-based) signature verification system is proposed in this thesis. In order to capture the signature’s stable parts and alleviate the difficulty of global matching, local features (histogram of oriented gradients, local binary patterns) based on gradient and neighborhood information inside local regions are used. The discriminative power of the extracted features is analyzed using support vector machine (SVM) classifiers, and their fusion gives better results than the state of the art. Scale invariant feature transform (SIFT) matching is also used as a complementary approach. Two different approaches for classifier training are investigated, namely global and user-dependent SVMs. User-dependent SVMs, trained separately for each user, learn to differentiate a user’s (genuine) reference signatures from other signatures. On the other hand, a single global SVM, trained with difference vectors between the features of query and reference signatures of all users in the training set, learns how to weight the importance of different types of dissimilarities. The fusion of all classifiers achieves a 6.97% equal error rate in skilled forgery tests using the public GPDS-160 signature database.
Former versions of the system have won several signature verification competitions: first place in 4NSigComp2010 and 4NSigComp2012 (the task without disguised signatures); first place in 4NSigComp2011 in the Chinese signatures category; and first place in SigWiComp2013 in all categories. The obtained results are better than those reported in the literature. A major benefit of the proposed method is that user enrollment does not require skilled forgeries of the enrolling user, which is essential for real-life applications.
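The equal error rate (EER) reported above is the operating point where the false accept rate equals the false reject rate. A minimal sketch of its computation, with synthetic scores purely to illustrate the threshold sweep:

```python
import numpy as np

# Synthetic similarity scores; in the real system these would come from the
# SVM classifiers for genuine signatures and skilled forgeries.
rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 500)   # genuine queries score higher
forgery = rng.normal(0.4, 0.1, 500)   # forgeries score lower

# Sweep every observed score as a candidate acceptance threshold
thresholds = np.sort(np.concatenate([genuine, forgery]))
far = np.array([(forgery >= t).mean() for t in thresholds])  # false accept rate
frr = np.array([(genuine < t).mean() for t in thresholds])   # false reject rate

i = np.argmin(np.abs(far - frr))      # point where the two error rates cross
eer = (far[i] + frr[i]) / 2
print(f"EER ~ {eer:.3f} at threshold {thresholds[i]:.3f}")
```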

    Lip segmentation using adaptive color space training

    In audio-visual speech recognition (AVSR), it is beneficial to use lip boundary information in addition to texture-dependent features. In this paper, we propose an automatic lip segmentation method that can be used in AVSR systems. The algorithm consists of the following steps: face detection, lip corner extraction, adaptive color space training for lip and non-lip regions using Gaussian mixture models (GMMs), and curve evolution using a level-set formulation based on region and image gradient fields. Region-based fields are obtained from adapted GMM likelihoods. We have tested the proposed algorithm on a database (SU-TAV) of 100 facial images and obtained objective performance results by comparing automatic lip segmentations with hand-marked ground truth segmentations. Experimental results are promising, though further work is needed to improve the robustness of the proposed method.
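The GMM-based region fields can be sketched as per-pixel likelihood comparisons between a lip model and a non-lip model. All mixture parameters below are hand-set for illustration, not learned or adapted as in the paper, and a single "redness" channel stands in for the full color space.

```python
import numpy as np

def gmm_loglik(x, weights, means, stds):
    # log sum_k w_k * N(x; mu_k, sigma_k), evaluated element-wise over an image
    x = np.asarray(x)[..., None]
    comp = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return np.log(comp.sum(axis=-1))

# Toy single-channel image: lip pixels are "redder" (higher values) than skin
img = np.array([[0.8, 0.75, 0.2],
                [0.9, 0.3, 0.25]])

# Two-component lip GMM vs a single-Gaussian skin model (illustrative values)
lip_ll = gmm_loglik(img, np.array([0.6, 0.4]), np.array([0.8, 0.9]), np.array([0.05, 0.1]))
skin_ll = gmm_loglik(img, np.array([1.0]), np.array([0.25]), np.array([0.1]))

lip_mask = lip_ll > skin_ll   # region field: where the lip model wins
print(lip_mask)
```

In the paper, the difference of such adapted log-likelihoods drives the level-set curve evolution rather than a hard mask.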

    Konum ve doku bilgisinin ortak dağılımını kullanarak yüz özniteliklerinin istatistiksel çıkarımı (Statistical facial feature extraction using joint distribution of location and texture information)

    A facial feature extraction method is proposed in this work, which uses location and texture information given a face image. Location and texture information can be learnt automatically by the system from training data. The best facial feature locations are found by maximizing the joint distribution of location and texture information of facial features. The performance of the method was found promising when tested on 100 test images. It is also observed that the new method performs better than active appearance models on the same test data.

    Offline signature verification using classifier combination of HOG and LBP features

    We present an offline signature verification system based on a signature’s local histogram features. The signature is divided into zones using both the Cartesian and polar coordinate systems, and two different histogram features are calculated for each zone: histogram of oriented gradients (HOG) and histogram of local binary patterns (LBP). The classification is performed using support vector machines (SVMs), where two different approaches for training are investigated, namely global and user-dependent SVMs. User-dependent SVMs, trained separately for each user, learn to differentiate a user’s signature from others, whereas a single global SVM, trained with difference vectors between the features of query and reference signatures of all users, learns how to weight dissimilarities. The global SVM classifier is trained using genuine and forgery signatures of subjects that are excluded from the test set, while user-dependent SVMs are trained separately for each subject using genuine signatures and random forgeries. The fusion of all classifiers (global and user-dependent classifiers trained with each feature type) achieves a 15.41% equal error rate in skilled forgery tests on the GPDS-160 signature database, without using any skilled forgeries in training.
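The LBP zone descriptor can be sketched as follows. This is a basic 3x3, 8-neighbour LBP histogram, not necessarily the exact variant or zoning used in the paper: each pixel is coded by comparing it with its 8 neighbours, and the zone is described by the normalized histogram of the resulting 8-bit codes.

```python
import numpy as np

def lbp_codes(img):
    # Basic 3x3 LBP: one bit per neighbour comparison, fixed clockwise order
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= center).astype(np.uint8) * np.uint8(1 << bit)
    return codes

# One 16x16 zone of a (here, random) signature image
rng = np.random.default_rng(2)
zone = rng.random((16, 16))

hist, _ = np.histogram(lbp_codes(zone), bins=256, range=(0, 256))
hist = hist / hist.sum()   # normalized 256-bin zone descriptor
print(hist.shape)
```

Per-zone histograms like this one (and the HOG analogues) would then be concatenated into the feature vectors fed to the SVMs.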

    Facial feature extraction using a probabilistic approach

    Facial features such as lip corners, eye corners and nose tip are critical points in a human face. Robust extraction of such facial feature locations is an important problem with a wide range of applications. In this work, we propose a probabilistic framework and several methods which can extract critical points on a face using both location and texture information. The new framework enables one to learn the facial feature locations probabilistically from training data. The principle is to maximize the joint distribution of location and appearance/texture parameters. We first introduce an independence assumption which enables an independent search for each feature. Then, we improve upon this model by assuming dependence of location parameters but independence of texture parameters. We model location parameters with a multivariate Gaussian, and the texture parameters are modeled with a Gaussian mixture model, which is much richer than standard subspace models such as principal component analysis. The location parameters are found by solving a maximum likelihood optimization problem. We show that the optimization problem can be solved using various search strategies. We introduce local gradient-based methods such as gradient ascent and Newton's method initialized from independent model locations, both of which require certain non-trivial assumptions to work. We also propose a multi-candidate coordinate ascent search and a coarse-to-fine search strategy, both of which depend on efficiently searching among multiple candidate points. Our framework is compared in detail with the conventional statistical approaches of active shape and active appearance models. We perform extensive experiments to show that the new methods outperform the conventional approaches in facial feature extraction accuracy.
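The multi-candidate coordinate ascent strategy can be sketched with a toy quadratic score standing in for the joint log-likelihood: each feature keeps a short list of candidate locations, and the search repeatedly re-picks each feature's candidate to maximize the joint score while the others are held fixed.

```python
import numpy as np

rng = np.random.default_rng(3)
true_locs = np.array([[10.0, 10.0], [30.0, 12.0], [20.0, 25.0]])  # 3 features

# 5 candidate points per feature: the correct point plus noisy distractors
cands = true_locs[:, None, :] + rng.normal(0, 4.0, (3, 5, 2))
cands[:, 0, :] = true_locs  # ensure the correct point is among the candidates

def joint_score(locs):
    # Toy stand-in for the joint log-likelihood: prefer the true configuration
    return -np.sum((locs - true_locs) ** 2)

picks = np.array([4, 4, 4])  # start from an arbitrary candidate per feature
for _ in range(3):           # a few coordinate-ascent sweeps
    for f in range(3):
        scores = []
        for c in range(5):
            trial = picks.copy()
            trial[f] = c     # vary feature f, hold the others fixed
            scores.append(joint_score(cands[np.arange(3), trial]))
        picks[f] = int(np.argmax(scores))

print(picks)  # each feature settles on its best candidate
```

In the actual method, the candidates would come from the per-feature texture models, and the score would couple the features through the multivariate Gaussian over locations.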

    Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information

    In this work, we propose a method which can extract critical points on a face using both location and texture information. This new approach can automatically learn feature information from training data. It finds the best facial feature locations by maximizing the joint distribution of location and texture parameters. We first introduce an independence assumption. Then, we improve upon this model by assuming dependence of location parameters but independence of texture parameters. We model combined location parameters with a multivariate Gaussian for computational reasons. The texture parameters are modeled with a Gaussian mixture model. It is shown that the new method outperforms active appearance models for the same experimental setup.

    Score level fusion of classifiers in off-line signature verification

    Offline signature verification is a task that benefits from matching both the global shape and local details; as such, it is particularly well suited to a fusion approach. We present a system that uses score-level fusion of complementary classifiers based on different local features (histogram of oriented gradients, local binary patterns and scale invariant feature transform descriptors), where each classifier uses a feature-level fusion to represent local features at coarse-to-fine levels. Two different classifier approaches are investigated, namely global and user-dependent classifiers. User-dependent classifiers are trained separately for each user to learn to differentiate that user’s genuine signatures from other signatures, while a single global classifier is trained with difference vectors between query and reference signatures of all users in the training set, to learn the importance of different types of dissimilarities. The fusion of all classifiers achieves state-of-the-art performance with a 6.97% equal error rate in skilled forgery tests using the public GPDS-160 signature database. The proposed system does not require skilled forgeries of the enrolling user, which is essential for real-life applications.
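Score-level fusion of this kind can be sketched as score normalization followed by averaging. The classifier names, scores, normalization statistics, and threshold below are illustrative assumptions, not values from the system.

```python
import numpy as np

def zscore_normalize(scores, mean, std):
    # Map each classifier's native score to a common scale using statistics
    # estimated on a development set (toy values here)
    return (scores - mean) / std

# Scores for one query from three hypothetical classifiers
# (e.g. HOG-based SVM, LBP-based SVM, SIFT matcher) on very different scales
raw = np.array([0.82, 1.7, 35.0])
dev_means = np.array([0.5, 1.0, 20.0])   # development-set score means (toy)
dev_stds = np.array([0.2, 0.5, 10.0])    # development-set score stds (toy)

fused = zscore_normalize(raw, dev_means, dev_stds).mean()  # simple-sum fusion
accept = fused > 0.0   # decision threshold chosen on the development set
print(round(fused, 3), bool(accept))
```

Weighted sums or a second-stage classifier over the normalized scores are common alternatives to the plain average shown here.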